Talk by Peter Loxley, of the University of New Mexico, Los Alamos Campus; and Center for Nonlinear Studies, LANL. Given to the Redwood Center for Theoretical Neuroscience at UC Berkeley.
Abstract: The two-dimensional Gabor function is adapted to natural image statistics by learning the joint distribution of the Gabor function parameters. This joint distribution is subsequently modeled to yield analytically tractable generative models of simple-cell receptive fields. Learning a basis of Gabor functions takes an order of magnitude fewer computations than an equivalent non-parameterized basis. Derived learning rules are shown to be capable of adapting Gabor parameters to the statistics of images of man-made and natural environments. Different tuning strategies are found by controlling learning through the Gabor parameter learning rates. Two opposing strategies include either well-resolved orientation or well-resolved spatial frequency. Three key Gabor parameters are found to be characterized by non-uniform marginal distributions with heavy tails, and strong correlations in the joint distribution. On image reconstruction, a generative Gabor model with fitted marginal distributions is shown to significantly outperform a Gabor model with uniformly sampled parameters. An additional increase in performance results when the correlations are modeled. However, the best generative model does not yet achieve the same performance as the learned model. A comparison with estimates for biological simple cells shows that the Gabor function adapted to natural image statistics correctly predicts many receptive field properties.
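To make the object of study concrete, here is a minimal sketch of a two-dimensional Gabor function: a Gaussian envelope multiplied by a sinusoidal carrier. The parameter names and default values below are illustrative assumptions, not taken from the talk; the talk's point is that distributions over such parameters (e.g. orientation, spatial frequency, envelope shape) can be learned from natural images rather than sampled uniformly.

```python
import numpy as np

def gabor_2d(x, y, x0=0.0, y0=0.0, theta=0.0,
             sigma_x=2.0, sigma_y=1.0, freq=0.25, phase=0.0):
    """Two-dimensional Gabor function (illustrative parameterization).

    (x0, y0): center of the receptive field
    theta:    orientation of the carrier (radians)
    sigma_x,
    sigma_y:  widths of the Gaussian envelope along/across the carrier
    freq:     spatial frequency of the carrier (cycles per pixel)
    phase:    phase offset of the carrier
    """
    # Rotate coordinates into the Gabor's own frame.
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    # Gaussian envelope times cosine carrier.
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    carrier = np.cos(2.0 * np.pi * freq * xr + phase)
    return envelope * carrier

# Sample one Gabor patch on a 16x16 pixel grid.
coords = np.arange(16) - 7.5
X, Y = np.meshgrid(coords, coords)
patch = gabor_2d(X, Y, theta=np.pi / 4, freq=0.2)
```

In a learned dictionary of the kind described, each basis element would be one such patch, with its small set of parameters fitted to image data instead of the full pixel array, which is why the parameterized basis is much cheaper to learn.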